Results 1 - 2 of 2
1.
Ophthalmol Ther ; 2024 Apr 20.
Article in English | MEDLINE | ID: mdl-38642283

ABSTRACT

INTRODUCTION: The aim of this work is to identify patients at risk of limited access to healthcare by analyzing, with an artificial-intelligence name-ethnicity classifier (NEC), the clinical stage of cataract at diagnosis and preoperative visual acuity.

METHODS: This retrospective, cross-sectional study includes patients seen in the cataract clinic of a tertiary care hospital between September 2017 and February 2020 with subsequent cataract surgery in at least one eye. We analyzed 4971 patients and 8542 eyes undergoing surgery.

RESULTS: The NEC identified 360 patients with names classified as 'non-German' compared to 4611 classified as 'German'. Advanced cataract was significantly more frequent in the 'non-German' group (7% vs. 5%; p = 0.025). Mean best-corrected visual acuity was 0.464 ± 0.406 LogMAR in the 'non-German' group and 0.420 ± 0.334 in the 'German' group (p = 0.009). This difference remained significant after exclusion of patients with non-lenticular ocular comorbidities. Surgical time and intraoperative complications did not differ between the groups. Retrobulbar or general anesthesia was chosen over topical anesthesia significantly more frequently in the 'non-German' group than in the 'German' group (24% vs. 18%, respectively; p < 0.001).

CONCLUSIONS: This study shows that artificial intelligence, via NECs, can uncover health disparities between people with German and non-German names. Patients with non-German names, who may face social barriers to healthcare access such as language barriers, present with more advanced cataracts and worse visual acuity. Artificial intelligence may help healthcare providers discover and counteract such inequalities and establish tailored preventive measures to decrease morbidity in vulnerable population subgroups.
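The group comparisons reported above (e.g. advanced cataract in 7% of 360 'non-German' vs. 5% of 4611 'German' patients) can be illustrated with a two-proportion z-test. This is a minimal stdlib-only sketch, not the study's actual statistical method (the abstract does not name one), and the event counts below are back-calculated from the reported percentages for illustration only:

```python
# Minimal sketch: two-proportion z-test (normal approximation) for comparing
# an outcome rate between two patient groups. Illustrative only; the counts
# are reconstructed from the abstract's percentages, not the raw study data.
from math import erf, sqrt

def two_proportion_z_test(k1, n1, k2, n2):
    """Return (z, two-sided p-value) comparing rates k1/n1 vs. k2/n2."""
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)          # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Approximate counts: ~7% of 360 'non-German', ~5% of 4611 'German' patients.
z, p = two_proportion_z_test(25, 360, 231, 4611)
```

Note that a test on these reconstructed counts need not reproduce the paper's p = 0.025; the authors had the exact counts and may have used a different test.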

2.
AI Soc ; : 1-25, 2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36789242

ABSTRACT

Uncovering the world's ethnic inequalities is hampered by a lack of ethnicity-annotated datasets. Name-ethnicity classifiers (NECs) can help, as they infer people's ethnicities from their names. However, since the latest generation of NECs relies on machine learning and artificial intelligence (AI), they may suffer from the same racist and sexist biases found in many AI systems. This paper therefore offers an algorithmic fairness audit of three NECs. It finds that the UK-Census-trained EthnicityEstimator displays large accuracy biases across ethnicities but comparatively smaller ones across gender and age groups. In contrast, the Twitter-trained NamePrism and the Wikipedia-trained Ethnicolr are more balanced across ethnicities but less so across gender and age. We relate these biases to global power structures manifested in naming conventions and in the NECs' input distribution of names. To mitigate the uncovered biases, we program a novel NEC, N2E, using fairness-aware AI techniques. We make N2E freely available at www.name-to-ethnicity.com. Supplementary Information: The online version contains supplementary material available at 10.1007/s00146-022-01619-4.
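The core of a fairness audit like the one described above is comparing a classifier's accuracy across subgroups. This is a hedged sketch of that per-group accuracy computation, with invented group names and toy records; it is not the paper's actual audit code:

```python
# Minimal sketch of a per-group accuracy audit: given (group, truth, prediction)
# records, compute accuracy per group and the largest accuracy gap between
# groups. Groups and data here are invented for illustration.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns ({group: accuracy}, max_accuracy_gap)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += (truth == pred)
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Toy audit data: hypothetical demographic groups with true vs. predicted labels.
toy = [
    ("group_a", "x", "x"), ("group_a", "x", "x"), ("group_a", "y", "x"),
    ("group_b", "x", "y"), ("group_b", "y", "y"),
]
acc, gap = accuracy_by_group(toy)
```

In an NEC audit, the grouping key would be the demographic attribute under study (ethnicity, gender, or age band) and the labels would be true vs. predicted ethnicities; a large gap signals the kind of accuracy bias the paper reports.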
